
    Do interventions to promote walking in groups increase physical activity? A meta-analysis.

    OBJECTIVE: Walking groups are increasingly being set up, but little is known about their efficacy in promoting physical activity. The present study aims to assess the efficacy of interventions to promote walking in groups for promoting physical activity in adults, and to explore potential moderators of this efficacy. METHOD: Systematic literature searches were conducted using multiple databases. A random-effects model was used for the meta-analysis, with sensitivity analysis. RESULTS: The effect of the interventions (19 studies, 4,572 participants) on physical activity was of medium size (d = 0.52), statistically significant (95% CI 0.32 to 0.71, p < 0.0001), and with a large fail-safe N of 753. Moderator analyses showed that lower-quality studies had larger effect sizes than higher-quality studies; studies reporting outcomes beyond six months had larger effect sizes than studies reporting outcomes up to six months; studies that targeted both genders had larger effect sizes than studies that targeted only women; and studies that targeted older adults had larger effect sizes than studies that targeted younger adults. No significant differences were found between studies delivered by professionals and those delivered by lay people. CONCLUSION: Interventions to promote walking in groups are efficacious at increasing physical activity. Despite low homogeneity of results and limitations (e.g. the small number of studies using objective measures of physical activity, publication bias), which might have influenced the findings, the large fail-safe N suggests these findings are robust. Possible explanations for heterogeneity between studies are discussed, and the need for further investigation of this heterogeneity is highlighted.
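
    The abstract reports a random-effects pooled effect size and a fail-safe N; as a rough sketch of how such quantities are typically computed, the snippet below pools invented standardized mean differences with a DerSimonian-Laird random-effects model and Rosenthal's fail-safe N. The study values are placeholders, not the data from this review.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study standardized mean differences (d) and variances.
d = np.array([0.40, 0.65, 0.30, 0.70, 0.55])
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])
k = len(d)

# Fixed-effect weights and Q statistic for heterogeneity.
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)

# DerSimonian-Laird estimate of between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval.
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (d_re - 1.96 * se_re, d_re + 1.96 * se_re)

# Rosenthal's fail-safe N: number of null studies needed to bring the
# combined z below one-tailed significance at alpha = 0.05.
z_sum = np.sum(d / np.sqrt(v))
fail_safe_n = z_sum ** 2 / stats.norm.ppf(0.95) ** 2 - k

print(f"pooled d = {d_re:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"tau^2 = {tau2:.3f}, fail-safe N = {fail_safe_n:.0f}")
```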

    Fast global interactive volume segmentation with regional supervoxel descriptors

    In this paper we propose a novel approach to fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition), as they provide global regularization for multiclass problems within an energy minimization framework. These models have yet to make an impact in biomedical imaging because of complexities in training and slow inference in 3D images, owing to the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier that refines more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are then refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness of the approach on a challenging cryo-soft X-ray tomography dataset, segmenting cell areas with only a few user scribbles as input. Further results demonstrate the effectiveness of our method in fully extracting different organelles from the cell volume with another few seconds of user interaction. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
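
    The hierarchical, contextual classification step can be loosely illustrated with off-the-shelf tools. The sketch below is a minimal stand-in only, assuming scikit-learn's ExtraTreesClassifier in place of the paper's Contextual Extremely Random Forests; the descriptors, labels and two-level hierarchy are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)

# Placeholder regional descriptors for N supervoxels (e.g. intensity and
# texture summaries); the real descriptors are the paper's own.
N, D = 2000, 16
X = rng.normal(size=(N, D))

# Two-level user-defined label hierarchy: coarse labels (e.g. background
# vs. cell) and finer labels (e.g. individual organelles).
y_coarse = rng.integers(0, 2, size=N)
y_fine = rng.integers(0, 4, size=N)

# Layer 1: classify coarse labels from the regional descriptors.
coarse_clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y_coarse)
coarse_proba = coarse_clf.predict_proba(X)

# Layer 2: append the coarse class likelihoods as contextual features and
# train a second forest to refine the finer labels.
X_context = np.hstack([X, coarse_proba])
fine_clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_context, y_fine)
fine_likelihoods = fine_clf.predict_proba(X_context)

# These per-supervoxel class likelihoods would then act as unary potentials
# in an MRF defined over the supervoxel adjacency graph.
print(fine_likelihoods.shape)  # (N, number of fine classes)
```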

    SMURFS: superpixels from multi-scale refinement of super-regions

    Recent applications in computer vision have come to rely on superpixel segmentation as a pre-processing step for higher-level vision tasks, such as object recognition, scene labelling or image segmentation. Here, we present a new algorithm, Superpixels from MUlti-scale ReFinement of Super-regions (SMURFS), which not only obtains state-of-the-art superpixels, but can also be applied hierarchically to form what we call n-th order super-regions. In essence, starting from a uniformly distributed set of super-regions, the algorithm iteratively alternates graph-based split and merge optimization schemes that yield superpixels which better represent the image. The split step is performed over the pixel grid to separate large super-regions into smaller superpixels. The merging step, conversely, is performed over the superpixel graph to create 2nd-order super-regions (super-segments). Iterative refinement over two scales of regions allows the algorithm to achieve better over-segmentation results than current state-of-the-art methods, as experimental results on the public Berkeley Segmentation Dataset (BSD500) show.
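
    The split-then-merge flow can be loosely imitated with standard tools, though this does not reproduce SMURFS' graph-based energy optimization. The sketch below assumes a recent scikit-image (older releases keep the region-adjacency-graph utilities under skimage.future.graph): an over-segmentation of the pixel grid stands in for the split step, and a mean-colour threshold cut over the superpixel graph stands in for the merge into coarser super-regions.

```python
import numpy as np
from skimage import data, segmentation, graph

image = data.astronaut()

# "Split": over-segment the pixel grid into small superpixels.
superpixels = segmentation.slic(image, n_segments=400, compactness=10, start_label=1)

# "Merge": build a region adjacency graph weighted by mean-colour difference
# and fuse similar neighbouring superpixels into coarser super-regions.
rag = graph.rag_mean_color(image, superpixels)
super_regions = graph.cut_threshold(superpixels, rag, thresh=30)

print("superpixels:", len(np.unique(superpixels)),
      "-> super-regions:", len(np.unique(super_regions)))
```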

    The role of walkers' needs and expectations in supporting maintenance of attendance at walking groups: a longitudinal multi-perspective study of walkers and walk group leaders.

    BACKGROUND: There is good evidence that when people's needs and expectations regarding behaviour change are met, they are satisfied with that change and maintain those changes. Despite this, there is a dearth of research on the needs and expectations of walkers when they first attend walking groups, and on whether and how these needs and expectations are satisfied after a period of attendance. Equally, there is an absence of research on how the people who lead these groups understand walkers' needs, and on what walk leaders do to address them. The present study aimed to address both of these gaps. METHODS: Two preliminary thematic analyses were conducted on face-to-face interviews with (a) eight walkers when they joined walking groups, five of whom were interviewed again three months later, and (b) eight walk leaders. A multi-perspective analysis building on these preliminary analyses identified similarities and differences within the themes that emerged from the interviews with walkers and walk leaders. RESULTS: Walkers indicated that their main needs and expectations when joining walking groups were achieving long-term social and health benefits. At the follow-up interviews, walkers indicated that satisfaction with meeting similar others within the groups was the main reason for continued attendance, while their main source of dissatisfaction was not feeling integrated into the existing walking groups. Walk leaders often acknowledged the same reasons for walkers joining and maintaining attendance at walking groups. However, they tended to attribute dissatisfaction and drop-out to uncontrollable environmental factors and/or walkers' personalities, and reported a lack of efficacy in addressing walkers' needs. CONCLUSIONS: Interventions to increase the retention of walkers should equip walk leaders with the skills to modify the underlying psychological factors affecting walkers' maintenance of attendance at walking groups. This should result in greater retention of walkers in walking groups, thereby allowing walkers to receive the long-term social and health benefits of participation in these groups.

    Towards infield, live plant phenotyping using a reduced-parameter CNN

    © 2019, The Author(s). There is an increase in the consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge that relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the number of model parameters and the size of the weight matrices of these very deep CNN-based models. Our combined method (separable convolution and SVD) reduced the weight matrices by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have also successfully tested our models on a mobile device.
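
    Both parameter-reduction ideas named in the abstract (separable convolutions and SVD) are standard and easy to sketch. The PyTorch snippet below uses invented layer sizes rather than the paper's networks, purely to show where the savings come from.

```python
import torch
import torch.nn as nn

# Standard 3x3 convolution: 256 * 256 * 3 * 3 = 589,824 weights.
standard = nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False)

# Depthwise-separable replacement: 256 * 3 * 3 + 256 * 256 = 67,840 weights.
separable = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=256, bias=False),  # depthwise
    nn.Conv2d(256, 256, kernel_size=1, bias=False),                         # pointwise
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print("standard:", n_params(standard), "separable:", n_params(separable))

# Truncated SVD of a dense weight matrix: W (out x in) is approximated by
# two thin factors that keep only the top-r singular values.
W = torch.randn(1024, 4096)
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
r = 64
W_low_rank = (U[:, :r] * S[:r]) @ Vh[:r, :]
kept = (U[:, :r].numel() + S[:r].numel() + Vh[:r, :].numel()) / W.numel()
print(f"rank-{r} factors keep {kept:.1%} of the original parameters")
```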

    Recognizing the presence of hidden visual markers in digital images

    As the promise of Virtual and Augmented Reality (VR and AR) becomes more realistic, an interesting aspect of our enhanced living environment is the availability, indeed the potential ubiquity, of scannable markers. Such markers could represent an initial step into the AR and VR worlds. In this paper, we address the important question of how to recognise the presence of visual markers in freeform digital photos. We use a particularly challenging marker format, called Artcodes, that is only minimally constrained in structure. Artcodes are a type of topological marker system enabling people, by following very simple drawing rules, to design markers that are both aesthetically beautiful and machine readable. Artcodes can be used to decorate the surface of any object, yet can also contain a hidden digital meaning. As with other more commonly used markers (such as barcodes and QR codes), these codes can link physical objects to digital data, augmenting everyday objects. Obviously, in order to trigger the scanning and subsequent decoding of such codes, devices must first be aware of the presence of Artcodes in the image. Although considerable literature exists on the detection of rigidly formatted structures and on geometrical feature descriptors such as Harris, SIFT, and SURF, these approaches are not sufficient for describing freeform topological structures such as Artcode images. In this paper, we propose a new topological feature descriptor that can be used in the detection of freeform topological markers, including Artcodes. This feature descriptor is called a Shape of Orientation Histogram (SOH). We construct the SOH feature vector by quantifying the level of symmetry and smoothness of the orientation histogram, and then use a Random Forest machine learning approach to classify images that contain Artcodes using the new feature vector. This system represents a potential first step towards an eventual mobile device application that would detect where in an image such an unconstrained code appears. We also explain how the system handles imbalanced datasets, important for rare, handcrafted codes such as Artcodes, and how it is evaluated. Our experimental evaluation shows good performance of the proposed classification model in the detection of Artcodes: an overall accuracy of approximately 0.83, F2 measure of 0.83, MCC of 0.68, AUC-ROC of 0.93, and AUC-PR of 0.91.
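
    As a loose illustration of the pipeline described (an orientation histogram summarized by symmetry and smoothness measures, classified with a Random Forest), the sketch below uses crude stand-in statistics rather than the paper's actual SOH definition, and random arrays in place of photographs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def orientation_histogram(image, bins=36):
    """Gradient-orientation histogram of a greyscale image, weighted by magnitude."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # range (-pi, pi]
    hist, _ = np.histogram(orientation, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-12)

def soh_like_features(hist):
    """Crude stand-ins for the SOH symmetry and smoothness measures."""
    symmetry = np.abs(hist - hist[::-1]).sum()   # mirror asymmetry of the histogram
    smoothness = np.abs(np.diff(hist)).sum()     # bin-to-bin variation
    return np.array([symmetry, smoothness, hist.max(), hist.std()])

# Toy data: random "images" with binary labels; real training would use photos
# that do or do not contain an Artcode.
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

X = np.array([soh_like_features(orientation_histogram(im)) for im in images])

# class_weight="balanced" is one simple way to handle imbalanced datasets.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```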

    Automated recovery of 3D models of plant shoots from multiple colour images

    Increased adoption of the systems approach to biological research has focussed attention on the use of quantitative models of biological objects. This includes a need for realistic 3D representations of plant shoots for quantification and modelling. Previous limitations in single- or multi-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed and, as such, is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on datasets of wheat and rice plants, as well as on a novel virtual dataset that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modelling applications, in a format that can be imported into the majority of 3D graphics and software packages.
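
    The representation of shoots as small planar sections can be illustrated by the standard least-squares plane fit used for such patches. The sketch below assumes synthetic points and shows only the plane-fitting idea, not the level-set boundary refinement or the multi-view reconstruction itself.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cluster via SVD.

    Returns the centroid and unit normal of the best-fit plane.
    """
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vh[-1]  # direction of least variance is the plane normal
    return centroid, normal

# Toy cluster of points scattered around a tilted, leaf-like plane.
rng = np.random.default_rng(1)
uv = rng.uniform(-1, 1, size=(500, 2))
points = np.column_stack([uv[:, 0], uv[:, 1], 0.3 * uv[:, 0] - 0.2 * uv[:, 1]])
points += rng.normal(scale=0.01, size=points.shape)

centroid, normal = fit_plane(points)
residuals = np.abs((points - centroid) @ normal)
print("normal:", np.round(normal, 3), "mean residual:", residuals.mean())
```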

    Connecting Everyday Objects with the Metaverse: A Unified Recognition Framework

    The recent Facebook rebranding to Meta has drawn renewed attention to the metaverse. Technology giants, amongst others, are increasingly embracing the vision and opportunities of a hybrid social experience that mixes physical and virtual interactions. As the metaverse gains traction, it is expected that everyday objects may soon connect more closely with virtual elements. However, discovering this "hidden" virtual world will be a crucial first step to interacting with it in this new augmented world. In this paper, we address the problem of connecting physical objects with their virtual counterparts, especially through connections built upon visual markers. We propose a unified recognition framework that guides approaches to metaverse access points. We illustrate the use of the framework through experimental studies under different conditions, in which an interactive and visually attractive decoration pattern, an Artcode, is used to enable the connection. This paper will be of interest to, amongst others, researchers working in Interaction Design or Augmented Reality who are seeking techniques or guidelines for augmenting physical objects in an unobtrusive, complementary manner.
    Comment: 6 pages, 4 figures, 1 table; accepted for publication at the 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), Los Alamitos, CA, US.